Similar resources
Prefetching Inlines to Improve Web Server Latency
Most HTML documents contain inlines, typically image files that are automatically requested from the server after the original document is parsed by the browser. In this paper, we analyze, through simulation, the potential benefits of having the server also parse the document and pre-fetch into its main memory cache the inlines that will be requested by the remote client. The parameters for the...
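As a rough illustration of the idea in that abstract (hypothetical class and regex, not the paper's simulator), a server could parse the HTML it is about to return, extract the inline image references, and warm an in-memory cache with them before the browser asks:

```python
# Illustrative sketch of server-side inline prefetching (hypothetical, not the
# paper's simulator): after serving an HTML document, parse it for inline
# image references and pull those files into an in-memory cache so the
# browser's follow-up requests hit memory instead of disk.
import re
from pathlib import Path

INLINE_RE = re.compile(r'<img[^>]+src="([^"]+)"', re.IGNORECASE)

class InlineCache:
    def __init__(self, root: str):
        self.root = Path(root)      # document root of the web server
        self.cache = {}             # inline path -> file bytes

    def serve_html(self, html_path: str) -> bytes:
        html = (self.root / html_path).read_bytes()
        self._prefetch_inlines(html.decode("utf-8", errors="ignore"))
        return html

    def _prefetch_inlines(self, html: str) -> None:
        # Parse the document the same way the browser will, and load each
        # referenced inline into memory before the request for it arrives.
        for src in INLINE_RE.findall(html):
            if src not in self.cache:
                try:
                    self.cache[src] = (self.root / src.lstrip("/")).read_bytes()
                except OSError:
                    pass  # missing inline; let the normal path handle it

    def serve_inline(self, src: str) -> bytes:
        # Follow-up request from the browser: ideally a memory-cache hit.
        return self.cache.get(src) or (self.root / src.lstrip("/")).read_bytes()
```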
Evaluating web user perceived latency using server side measurements
In recent years, the central performance problem in the World Wide Web has been user-perceived latency: the time a user spends waiting for a Web page he/she requested. Impatience with poor performance is the most common reason visitors terminate their visit at Web sites. For e-commerce sites, such abandonment translates into lost revenue. For this reason, measuring the delay experie...
Network assisted latency reduction for mobile web browsing
To load a webpage, a web browser first downloads the base HTML file of the page in order to discover the list of objects referenced in the page. This process takes roughly one round-trip time and constitutes a significant portion of the web browsing delay on mobile devices as wireless networks suffer from longer transmission and access delays compared to wired networks. In this work, we propose...
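A minimal sketch of the load sequence that abstract describes (standard-library only; the tag set and URLs are illustrative): the client cannot request a page's objects until the base HTML has been downloaded and parsed, so object discovery alone costs roughly one round trip.

```python
# Minimal sketch of the page-load sequence described above (illustrative):
# fetching the base HTML (round trip 1) is a prerequisite for even knowing
# which objects to request in round trips 2..n.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class ObjectCollector(HTMLParser):
    """Collects the URLs of objects referenced by the base HTML."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.objects = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.objects.append(urljoin(self.base_url, attrs["src"]))
        elif tag == "link" and attrs.get("href"):
            self.objects.append(urljoin(self.base_url, attrs["href"]))

def load_page(url: str) -> list[str]:
    # Round trip 1: download the base HTML file.
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    # Only now can the referenced objects be discovered and fetched,
    # which is the delay such proposals try to hide.
    collector = ObjectCollector(url)
    collector.feed(html)
    return collector.objects
```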
Web Access Latency Reduction Using CRF-Based Predictive Caching
Reducing the Web access latency perceived by a Web user has become a problem of interest. Web prefetching and caching are two effective techniques that can be used together to reduce access latency on the Internet. Because the success of Web prefetching relies mainly on the accuracy of its prediction method, in this paper we employ a powerful sequential learning model, Condi...
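To make the prefetching/caching combination concrete, here is a generic sketch with a pluggable predictor (a simple first-order frequency model standing in for the paper's CRF; all names are hypothetical): after each request, the most likely next page is prefetched so that a correct prediction turns the next access into a cache hit.

```python
# Generic sketch of prefetching combined with caching. The predictor is a
# first-order frequency model, not the paper's CRF; prediction accuracy is
# what determines how much latency the scheme actually hides.
from collections import Counter, defaultdict

class PredictivePrefetchCache:
    def __init__(self, fetch):
        self.fetch = fetch                          # function: url -> content
        self.cache = {}                             # prefetched url -> content
        self.transitions = defaultdict(Counter)     # prev url -> next-url counts
        self.prev = None

    def get(self, url: str):
        if url in self.cache:
            content = self.cache.pop(url)           # prefetch hit: no fetch delay
        else:
            content = self.fetch(url)               # miss: pay the full latency
        if self.prev is not None:
            self.transitions[self.prev][url] += 1   # learn the access sequence
        self.prev = url
        self._prefetch_next(url)
        return content

    def _prefetch_next(self, url: str) -> None:
        # Prefetch the most likely successor of the page just served.
        if self.transitions[url]:
            nxt, _ = self.transitions[url].most_common(1)[0]
            if nxt not in self.cache:
                self.cache[nxt] = self.fetch(nxt)
```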
Relationship between guaranteed rate server and latency rate server
To analyze scheduling algorithms in computer networks, two models have been proposed and widely used in the literature, namely the Guaranteed Rate (GR) server model and the Latency Rate (LR) server model. While many scheduling algorithms have been proven to belong to GR or LR or both, the relationship between the two models is not clear. The purpose of this paper is to investigate this relati...
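For orientation, the two models compared there take the following standard forms (recalled from the scheduling literature, not from the truncated abstract above):

```latex
% Guaranteed Rate (GR): each packet p_i^j of session i, arriving at A(p_i^j)
% with length l_i^j and reserved rate r_i, is assigned a guaranteed-rate clock
% value, and a GR server must transmit it by that value plus a server-dependent
% error term \beta:
\[
  \mathrm{GRC}(p_i^j) \;=\; \max\bigl(A(p_i^j),\, \mathrm{GRC}(p_i^{j-1})\bigr) + \frac{l_i^j}{r_i},
  \qquad \mathrm{GRC}(p_i^0) = 0 .
\]
% Latency Rate (LR): for any busy period of session i starting at time \tau,
% the service W_i(\tau, t) delivered by time t is lower-bounded by a rate
% \rho_i after a latency \Theta_i:
\[
  W_i(\tau, t) \;\ge\; \max\bigl(0,\; \rho_i\,(t - \tau - \Theta_i)\bigr).
\]
```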
Journal
Journal title: Informatics Control Measurement in Economy and Environment Protection
Year: 2017
ISSN: 2083-0157,2391-6761
DOI: 10.5604/01.3001.0010.5215